Results 1 - 20 of 826
2.
Pediatr Res ; 94(1): 96-98, 2023 07.
Article in English | MEDLINE | ID: mdl-36550353

ABSTRACT

BACKGROUND: There are no generally accepted age-appropriate reference ranges for laboratory values in neonates, which also matters for drug development. The International Neonatal Consortium (INC) is working to define actionable reference ranges for commonly used laboratory values in neonates. METHODS: A structured literature search was performed to identify standards or recommendations for publications that present neonatal laboratory data and to assess the publication quality of laboratory values in neonates. Using a modified Delphi approach, an assessment and data extraction instrument was developed to screen for completeness of information. RESULTS: Of 2908 hits, 281 papers were retained for full reading and 257 for data extraction. None of the papers reported a publication standard. Based on the extraction instrument, most papers presented findings from a single country or unit. The median number of neonates was 120, with uncertainty as to whether measurements were single or repeated. Clinically meaningful information on age, sex, and medical conditions was commonly provided, whereas information on pharmacotherapy, equipment, analytical method, or laboratory location was rarely mentioned. CONCLUSIONS: Published information on laboratory values for neonates is sparse, unsystematic, and incomplete. This undermines efforts to compare treatments, monitor safety, and guide clinical management. Furthermore, there appears to be no standard yet for reporting laboratory values in neonates. IMPACT: There are no generally accepted age-appropriate reference ranges for laboratory values in neonates, leaving a significant knowledge gap, including for safety reporting and drug development in neonates. We performed a literature search to identify standards or recommendations for publications on neonatal laboratory data and to assess the publication quality of laboratory values in clinical studies involving neonates. No standards or recommendations for publications that present neonatal laboratory data were identified, and published information on laboratory values for neonates is sparse, unsystematic, and incomplete.
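To make the screening approach more concrete, the sketch below shows how a completeness-of-reporting tally over extracted fields could be computed; the field names and values are hypothetical and not taken from the INC instrument.

```python
# Minimal sketch (hypothetical field names): tally how often each reporting
# item is present across extracted papers.
import pandas as pd

# One row per paper; True/False flags for whether an item was reported.
extracted = pd.DataFrame([
    {"paper": "A", "age_reported": True,  "sex_reported": True,  "analytical_method": False, "n_neonates": 140},
    {"paper": "B", "age_reported": True,  "sex_reported": False, "analytical_method": False, "n_neonates": 85},
    {"paper": "C", "age_reported": True,  "sex_reported": True,  "analytical_method": True,  "n_neonates": 120},
])

item_cols = ["age_reported", "sex_reported", "analytical_method"]
reporting_rates = extracted[item_cols].mean()   # share of papers reporting each item
median_n = extracted["n_neonates"].median()     # median sample size across papers

print(reporting_rates)
print(f"Median number of neonates per paper: {median_n:.0f}")
```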


Subjects
Bibliometrics, Neonatology, Publications, Humans, Infant, Newborn, Publications/standards, Reference Values, Delphi Technique, Clinical Laboratory Techniques
3.
Gac Sanit ; 36(6): 506-511, 2022.
Article in English | MEDLINE | ID: mdl-35584982

ABSTRACT

OBJECTIVE: The need to generate evidence related to COVID-19, the acceleration of publication and peer-review processes, and the competition between journals may have influenced the quality of COVID-19 papers. Our objective was to compare the characteristics of COVID-19 papers with those of non-COVID-19 papers and identify the variables in which they differ. METHOD: We conducted a journal-matched case-control study. Cases were COVID-19 papers and controls were non-COVID-19 papers published between March 2020 and January 2021. Journals belonging to five different Journal Citation Reports categories were selected. Within each selected journal, one COVID-19 paper (where there was one) and one non-COVID-19 paper were selected. Conditional logistic regression models were fitted. RESULTS: We included 81 COVID-19 and 143 non-COVID-19 papers. Descriptive observational studies and analytical observational studies had a 55-fold (odds ratio [OR]: 55.12; 95% confidence interval [95%CI]: 7.41-409.84) and 19-fold (OR: 19.28; 95%CI: 3.09-120.31) higher likelihood of being COVID-19 papers, respectively. COVID-19 papers also had a higher probability of having a smaller sample size (OR: 7.15; 95%CI: 2.33-21.94) and of being cited since their publication (OR: 4.97; 95%CI: 1.63-15.10). CONCLUSIONS: The characteristics of COVID-19 papers differed from those of non-COVID-19 papers published in the first months of the pandemic. To ensure the publication of good scientific evidence, the quality of COVID-19 papers should be preserved.
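As a rough illustration of the journal-matched analysis, the sketch below fits a conditional logistic regression with the journal as the matching stratum. It assumes statsmodels' ConditionalLogit and uses invented variable names (is_covid_paper, is_descriptive, small_sample, journal_id) and toy data, not the study's dataset.

```python
# Hedged sketch: conditional logistic regression for a 1:1 journal-matched
# case-control design. Toy data only; coefficients are exponentiated to ORs.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

df = pd.DataFrame({
    "is_covid_paper": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],  # case vs matched control
    "is_descriptive": [1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1],  # descriptive observational design
    "small_sample":   [1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0],  # below some sample-size cutoff
    "journal_id":     [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],  # matching stratum: same journal
})

model = ConditionalLogit(
    df["is_covid_paper"],
    df[["is_descriptive", "small_sample"]],
    groups=df["journal_id"],
)
res = model.fit()
print(np.exp(res.params))       # odds ratios
print(np.exp(res.conf_int()))   # 95% CIs on the OR scale
```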


Subjects
COVID-19, Publications, Humans, Case-Control Studies, COVID-19/epidemiology, Observational Studies as Topic, Pandemics, Periodicals as Topic, Publications/standards, Publications/statistics & numerical data
4.
Pharm. care Esp ; 24(2): 4-5, Apr. 15, 2022.
Article in Spanish | IBECS | ID: ibc-204750
5.
PLoS Biol ; 20(2): e3001562, 2022 02.
Article in English | MEDLINE | ID: mdl-35180228

ABSTRACT

The power of language to shape the reader's interpretation of biomedical results should not be underestimated. Misreporting and misinterpretation are pressing problems in the reporting of randomized controlled trials (RCTs). This may be partly related to the statistical significance paradigm used in clinical trials, centered on a P value cutoff of 0.05. Strict use of this cutoff may lead clinical researchers to describe results with P values approaching but not reaching the threshold as "almost significant." The question is how phrases expressing nonsignificant results have been reported in RCTs over the past 30 years. To this end, we conducted a quantitative analysis of the English full texts of 567,758 RCTs recorded in PubMed between 1990 and 2020 (81.5% of all published RCTs in PubMed). We determined the presence of 505 predefined phrases denoting results that approach but do not cross the line of formal statistical significance (P < 0.05). We modeled temporal trends in phrase data with Bayesian linear regression, and evidence for temporal change was obtained through Bayes factor (BF) analysis. In a randomly sampled subset, the associated P values were manually extracted. We identified 61,741 phrases indicating almost significant results in 49,134 RCTs (8.65%; 95% confidence interval (CI): 8.58% to 8.73%). The overall prevalence of these phrases remained stable over time, with the most prevalent phrases being "marginally significant" (in 7,735 RCTs), "all but significant" (7,015), "a nonsignificant trend" (3,442), "failed to reach statistical significance" (2,578), and "a strong trend" (1,700). The strongest evidence for an increased temporal prevalence was found for "a numerical trend," "a positive trend," "an increasing trend," and "nominally significant." In contrast, the phrases "all but significant," "approaches statistical significance," "did not quite reach statistical significance," "difference was apparent," "failed to reach statistical significance," and "not quite significant" decreased over time. In a randomly sampled subset of 29,000 phrases, 11,926 corresponding P values were manually identified, of which 68.1% ranged between 0.05 and 0.15 (CI: 67. to 69.0; median 0.06). Our results show that RCT reports regularly contain specific phrases describing marginally nonsignificant results to report P values close to but above the dominant 0.05 cutoff. The stable prevalence of these phrases over time indicates that this practice of broadly interpreting P values close to a predefined threshold remains common. To enhance responsible and transparent interpretation of RCT results, researchers, clinicians, reviewers, and editors may reduce the focus on formal statistical significance thresholds, report P values with corresponding effect sizes and CIs, and focus on the clinical relevance of the statistical differences found in RCTs.
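A minimal sketch of the core idea, phrase detection plus a prevalence estimate with a normal-approximation confidence interval, is shown below; the corpus and phrase list are toy stand-ins for the authors' full-text pipeline.

```python
# Scan article text for a few of the reported "almost significant" phrases and
# estimate their prevalence with a normal-approximation 95% CI. Toy data only.
import math
import re

PHRASES = [
    "marginally significant",
    "a nonsignificant trend",
    "failed to reach statistical significance",
]
pattern = re.compile("|".join(re.escape(p) for p in PHRASES), re.IGNORECASE)

articles = [
    "The difference was marginally significant (P = 0.06).",
    "Treatment A outperformed B (P = 0.01).",
    "There was a nonsignificant trend toward benefit (P = 0.08).",
    "No adverse events were observed.",
]

hits = sum(bool(pattern.search(text)) for text in articles)
n = len(articles)
p = hits / n
se = math.sqrt(p * (1 - p) / n)
print(f"Prevalence: {p:.1%} (95% CI {p - 1.96 * se:.1%} to {p + 1.96 * se:.1%})")
```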


Subjects
PubMed/standards, Publications/standards, Randomized Controlled Trials as Topic/standards, Research Design/standards, Research Report/standards, Bayes Theorem, Bias, Humans, Linear Models, Outcome Assessment, Health Care/methods, Outcome Assessment, Health Care/standards, Outcome Assessment, Health Care/statistics & numerical data, PubMed/statistics & numerical data, Publications/statistics & numerical data, Randomized Controlled Trials as Topic/statistics & numerical data, Reproducibility of Results
6.
PLoS Biol ; 20(2): e3001285, 2022 02.
Article in English | MEDLINE | ID: mdl-35104285

ABSTRACT

Amid the Coronavirus Disease 2019 (COVID-19) pandemic, preprints in the biomedical sciences are being posted and accessed at unprecedented rates, drawing widespread attention from the general public, press, and policymakers for the first time. This phenomenon has sharpened long-standing questions about the reliability of information shared prior to journal peer review. Does the information shared in preprints typically withstand the scrutiny of peer review, or are conclusions likely to change in the version of record? We assessed preprints from bioRxiv and medRxiv that had been posted and subsequently published in a journal through April 30, 2020, representing the initial phase of the pandemic response. We utilised a combination of automatic and manual annotations to quantify how an article changed between the preprinted and published version. We found that the total number of figure panels and tables changed little between preprint and published articles. Moreover, the conclusions of 7.2% of non-COVID-19-related and 17.2% of COVID-19-related abstracts underwent a discrete change by the time of publication, but the majority of these changes did not qualitatively alter the conclusions of the paper.
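As a hedged illustration of quantifying change between versions (not the study's annotation pipeline), the snippet below computes a simple similarity ratio between a preprint abstract and its published counterpart using the Python standard library.

```python
# Quantify how much an abstract changed between the preprint and the published
# version using a character-level similarity ratio. Toy sentences only.
from difflib import SequenceMatcher

preprint_abstract = (
    "The drug reduced viral load in hamsters and may be a promising candidate."
)
published_abstract = (
    "The drug reduced viral load in hamsters; clinical relevance remains to be shown."
)

ratio = SequenceMatcher(None, preprint_abstract, published_abstract).ratio()
print(f"Similarity between versions: {ratio:.2f}")  # 1.0 = identical, lower = more change
```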


Subjects
COVID-19/prevention & control, Information Dissemination/methods, Peer Review, Research/trends, Periodicals as Topic/trends, Publications/trends, COVID-19/epidemiology, COVID-19/virology, Humans, Pandemics/prevention & control, Peer Review, Research/methods, Peer Review, Research/standards, Periodicals as Topic/standards, Periodicals as Topic/statistics & numerical data, Publications/standards, Publications/statistics & numerical data, Publishing/standards, Publishing/statistics & numerical data, Publishing/trends, SARS-CoV-2/isolation & purification, SARS-CoV-2/physiology
7.
PLoS Biol ; 20(2): e3001470, 2022 02.
Article in English | MEDLINE | ID: mdl-35104289

ABSTRACT

Preprints allow researchers to make their findings available to the scientific community before they have undergone peer review. Studies of preprints within bioRxiv have largely focused on article metadata and on how often these preprints are downloaded, cited, published, and discussed online. A missing element that has yet to be examined is the language contained within the bioRxiv preprint repository. We sought to compare and contrast linguistic features of bioRxiv preprints with published biomedical text as a whole, as this offers an excellent opportunity to examine how peer review changes these documents. The most prevalent features that changed appear to be associated with typesetting and with mentions of supporting information sections or additional files. In addition to the text comparison, we created document embeddings derived from a preprint-trained word2vec model. We found that these embeddings are able to parse out different scientific approaches and concepts, link unannotated preprint-peer-reviewed article pairs, and identify journals that publish linguistically similar papers to a given preprint. We also used these embeddings to examine factors associated with the time elapsed between the posting of a first preprint and the appearance of a peer-reviewed publication, finding that preprints with more versions posted and more textual changes took longer to publish. Lastly, we constructed a web application (https://greenelab.github.io/preprint-similarity-search/) that allows users to identify which journals and articles are most linguistically similar to a bioRxiv or medRxiv preprint, as well as to observe where the preprint would be positioned within a published article landscape.
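The sketch below illustrates the document-embedding idea on toy data, assuming gensim 4.x: word vectors are averaged into document vectors and compared by cosine similarity, loosely mirroring how a preprint could be matched to linguistically similar corpora. It is not the authors' pipeline.

```python
# Train a tiny word2vec model, build document embeddings by averaging word
# vectors, and compare a "preprint" to candidate corpora by cosine similarity.
import numpy as np
from gensim.models import Word2Vec

corpus = [
    "we sequenced the viral genome and annotated variants".split(),
    "the randomized trial compared drug a with placebo".split(),
    "protein structure was resolved by cryo electron microscopy".split(),
]
model = Word2Vec(sentences=corpus, vector_size=50, window=5, min_count=1, epochs=50, seed=1)

def doc_embedding(tokens, wv):
    vecs = [wv[t] for t in tokens if t in wv.key_to_index]
    return np.mean(vecs, axis=0) if vecs else np.zeros(wv.vector_size)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

preprint = "we annotated variants in the viral genome".split()
for name, doc in [("genomics", corpus[0]), ("trials", corpus[1]), ("structural", corpus[2])]:
    sim = cosine(doc_embedding(preprint, model.wv), doc_embedding(doc, model.wv))
    print(f"{name}: {sim:.2f}")
```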


Subjects
Language, Peer Review, Research, Preprints as Topic, Biomedical Research, Publications/standards, Terminology as Topic
10.
Proc Natl Acad Sci U S A ; 118(39)2021 09 28.
Article in English | MEDLINE | ID: mdl-34544861

ABSTRACT

Unbiased science dissemination has the potential to alleviate some of the known gender disparities in academia by exposing female scholars' work to other scientists and the public. And yet, we lack a comprehensive understanding of the relationship between gender and science dissemination online. Our large-scale analyses, encompassing half a million scholars, revealed that female scholars' work is mentioned less frequently than male scholars' work in all research areas. When exploring the characteristics associated with online success, we found that the impact of prior work, social capital, and gendered tie formation in coauthorship networks are linked with online success for men, but not for women, even in the areas with the highest female representation. These results suggest that while men's scientific impact and collaboration networks are associated with higher visibility online, there are no universally identifiable facets associated with success for women. Our comprehensive empirical evidence indicates that the gender gap in online science dissemination is coupled with a lack of understanding of the characteristics that are linked with female scholars' success, which might hinder efforts to close the gender gap in visibility.
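Purely as an illustration of the kind of association examined here, the sketch below correlates a simple coauthorship-network measure (degree) with online mention counts separately by gender; the network, mention counts, and column names are invented and much cruder than the study's measures.

```python
# Relate coauthorship degree to online mentions, split by gender. Toy data only.
import networkx as nx
import pandas as pd
from scipy.stats import spearmanr

edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F")]
G = nx.Graph(edges)

scholars = pd.DataFrame({
    "author":   ["A", "B", "C", "D", "E", "F"],
    "gender":   ["F", "M", "M", "F", "M", "F"],
    "mentions": [12, 30, 45, 8, 22, 5],          # online mentions of their work
})
scholars["degree"] = scholars["author"].map(dict(G.degree()))

for gender, grp in scholars.groupby("gender"):
    rho, p = spearmanr(grp["degree"], grp["mentions"])
    print(f"{gender}: Spearman rho = {rho:.2f} (n = {len(grp)})")
```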


Subjects
Authorship/standards, Online Systems/standards, Peer Review, Research/trends, Publications/standards, Science/standards, Sexism/prevention & control, Female, Humans, Male
11.
PLoS One ; 16(9): e0257093, 2021.
Article in English | MEDLINE | ID: mdl-34555033

ABSTRACT

OBJECTIVE: To evaluate the reporting quality of randomized controlled trials (RCTs) involving patients with COVID-19 and analyse the factors that influence it. METHODS: PubMed, Embase, Web of Science and the Cochrane Library databases were searched to collect RCTs involving patients with COVID-19. The search covered the period from database inception to December 1, 2020. The CONSORT 2010 statement was used to evaluate the overall reporting quality of these RCTs. RESULTS: A total of 53 RCTs were included. The average reporting rate for the 37 items in the CONSORT checklist was 53.85%, with a mean overall adherence score of 13.02 ± 3.546 (range: 7 to 22). Multivariate linear regression analysis showed that the overall adherence score to the CONSORT guideline was associated with journal impact factor (P = 0.006) and endorsement of the CONSORT statement (P = 0.014). CONCLUSION: Although many RCTs of COVID-19 have been published in different journals, the overall reporting quality of these articles was suboptimal and cannot provide valid evidence for clinical decision-making and systematic reviews. Therefore, more journals should endorse the CONSORT statement, and authors should strictly follow the relevant provisions of the CONSORT guideline when reporting their trials. Future RCTs should particularly focus on improving the reporting of allocation concealment, blinding, and sample size estimation.
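A minimal sketch of the scoring and regression idea is shown below: an adherence score counted over CONSORT items and an ordinary least squares model relating it to journal impact factor and CONSORT endorsement. The data and column names are invented, and the study's exact modelling choices may differ.

```python
# Adherence score = number of CONSORT items reported per trial, then an OLS
# model of the score on impact factor and endorsement. Toy data only.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.DataFrame({
    "items_reported":   [13, 9, 17, 22, 7, 12, 15, 11],   # out of 37 checklist items
    "impact_factor":    [2.1, 1.4, 5.6, 9.8, 1.1, 3.0, 4.2, 2.7],
    "endorses_consort": [0, 0, 1, 1, 0, 1, 1, 0],
})
trials["adherence_rate"] = trials["items_reported"] / 37   # per-trial reporting rate

model = smf.ols("items_reported ~ impact_factor + endorses_consort", data=trials).fit()
print(model.params)
print(model.pvalues)
```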


Subjects
COVID-19/epidemiology, Publications/standards, Publishing/standards, Randomized Controlled Trials as Topic/standards, Data Management/standards, Guideline Adherence/standards, Humans, Journal Impact Factor, PubMed/standards, SARS-CoV-2/pathogenicity
13.
Asian Pac J Cancer Prev ; 22(8): 2385-2389, 2021 Aug 01.
Article in English | MEDLINE | ID: mdl-34452550

ABSTRACT

BACKGROUND: Breast cancer has a rich history of research over the past 75 years, and many studies have had disruptive influences on the field itself. Our study employs a new, validated measurement to determine the most disruptive publications within the field of breast cancer. MATERIALS AND METHODS: The PubMed® database was queried for breast cancer articles published between 1954 and 2014 within 21 journals deemed important to the field. Articles were then scored for disruption and citation count. The top 100 most disruptive and most cited publications were compiled and analyzed. RESULTS: The disruption score was a measurement distinct from citation count, with a low level of correlation between the two. Disruptive publications tended to skew older, with a median publication year of 1977. The score identified a variety of study designs and publication types across multiple journals. CONCLUSIONS: Measuring the disruptive quality of a publication is a new way to describe its academic impact and is distinct from citation count. Used in conjunction with citation count, it may give a more descriptive bibliometric assessment of the literature. Further exploration within the field of oncology is warranted.
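The snippet below sketches one common formulation of the disruption (CD) index, which may differ in detail from the validated measurement the authors used: citing papers that build only on the focal paper raise the score, those that also cite its references lower it, and those that cite only its references enlarge the denominator.

```python
# D = (n_i - n_j) / (n_i + n_j + n_k) for a focal paper and its citation context.
def disruption_index(focal_refs, citers):
    """focal_refs: set of papers the focal paper cites.
    citers: dict mapping each later paper to the set of papers it cites."""
    n_i = n_j = n_k = 0
    for cited in citers.values():
        cites_focal = "FOCAL" in cited
        cites_refs = bool(cited & focal_refs)
        if cites_focal and not cites_refs:
            n_i += 1          # builds only on the focal paper
        elif cites_focal and cites_refs:
            n_j += 1          # builds on the focal paper and its antecedents
        elif cites_refs:
            n_k += 1          # bypasses the focal paper entirely
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0

focal_refs = {"R1", "R2"}
citers = {
    "P1": {"FOCAL"},           # disruptive signal
    "P2": {"FOCAL", "R1"},     # consolidating signal
    "P3": {"R2"},              # ignores the focal paper
    "P4": {"FOCAL"},
}
print(disruption_index(focal_refs, citers))   # (2 - 1) / (2 + 1 + 1) = 0.25
```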


Subjects
Biomedical Research/standards, Breast Neoplasms/diagnosis, Breast Neoplasms/therapy, Databases, Factual, Journal Impact Factor, Periodicals as Topic/standards, Publications/standards, Female, Humans
14.
BMC Cancer ; 21(1): 889, 2021 Aug 03.
Article in English | MEDLINE | ID: mdl-34344325

ABSTRACT

BACKGROUND: Identifying ineffective practices that have been used in oncology is important for reducing wasted resources and harm. We sought to examine, in published oncology studies, the prevalence of practices that are in use but have been shown in RCTs to be ineffective (medical reversals). METHODS: We cross-sectionally analyzed studies published in three high-impact oncology medical journals (2009-2018) and abstracted data on the frequency and characteristics of medical reversals. RESULTS: Of the 64 oncology reversals, medications (44%) represented the most common intervention type (39% were targeted). Fourteen (22%) were funded by pharmaceutical/industry sources only, and 56% were funded by an organization other than pharmaceutical/industry. The median number of years that the practice had been in use prior to the reversal study was 9 years (range 1-50 years). CONCLUSION: Here we show that oncology reversals most often involve the administration of medications, have been practiced for years, and are often identified through studies funded by non-industry organizations.


Subjects
Medical Oncology, Periodicals as Topic/statistics & numerical data, Publications/statistics & numerical data, Research/statistics & numerical data, Research/standards, Cross-Sectional Studies, Humans, Medical Oncology/statistics & numerical data, Publications/standards
15.
J Clin Epidemiol ; 139: 130-139, 2021 11.
Article in English | MEDLINE | ID: mdl-34229092

ABSTRACT

OBJECTIVE: This study (MEasurement Reactions In Trials, MERIT) aimed to produce recommendations on how best to minimize bias from measurement reactivity (MR) in randomized controlled trials of interventions to improve health. STUDY DESIGN AND SETTING: The MERIT study consisted of: (1) an updated systematic review that examined whether measuring participants affected their health-related behaviors, relative to no-measurement controls, together with three rapid reviews to identify (i) existing guidance on MR, (ii) existing systematic reviews of studies that have quantified the effects of measurement on behavioral or affective outcomes, and (iii) studies that have investigated the effects of objective measurements of behavior on health-related behavior; (2) a Delphi study to identify the scope of the recommendations; and (3) an expert workshop in October 2018 to discuss potential recommendations in groups. RESULTS: The expert group produced fourteen recommendations on how to: (1) identify whether bias is likely to be a problem for a trial; (2) decide whether to collect data about whether bias is likely to be a problem; and (3) design trials to minimize the likelihood of this bias. CONCLUSION: These recommendations raise awareness of how and where taking measurements can produce bias in trials and are thus helpful for trial design.


Subjects
Biomedical Research/standards, Data Accuracy, Guidelines as Topic, Publication Bias, Publications/standards, Randomized Controlled Trials as Topic/standards, Research Report/standards, Biomedical Research/statistics & numerical data, Humans, Publications/statistics & numerical data, Randomized Controlled Trials as Topic/statistics & numerical data
16.
Am J Emerg Med ; 49: 338-342, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34229241

ABSTRACT

BACKGROUND: Some studies have suggested gender disparities in both pay and academic promotion, which may adversely affect salary and career progression for female physicians. The areas of research output, funding, and authorship have not been fully and systematically examined in the emergency medicine literature. We hypothesized that gender differences may exist in research output, impact, authorship, and funding. METHODS: We conducted a cross-sectional study examining all articles published in the top three emergency medicine journals, as determined by Impact Factor, between February 2015 and February 2018. We compared authorship position, number of citations per article, funding, and h-index of each author by gender. RESULTS: Of the 10,118 authors representing 4,166 original articles in our sample, 7,562 (74.7%) were male and 2,556 (25.3%) were female, with females underrepresented relative to the known proportion of female emergency medicine faculty. Males were proportionally more likely to be last authors (OR 1.65, 95% CI 1.47-1.86) and less likely to be first authors than females (OR 0.85, 95% CI 0.77-0.94). No difference between males and females was found in the proportion named as having funding (OR 1.02, 95% CI 0.78-1.35). Males had higher h-indices than females (5 vs. 3, p < .001) as well as a higher average number of citations (OR 1.068, 95% CI 1.018-1.119). CONCLUSIONS: Males outnumbered females not only in number of publications but also in number of citations, h-index, and last authorship. Future studies of physician gender disparities in emergency medicine need to account for these population differences.
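For readers unfamiliar with how such odds ratios are derived, the sketch below computes a 2x2 odds ratio with a Wald 95% confidence interval from made-up authorship counts; the numbers are illustrative and do not reproduce the study's data.

```python
# Odds ratio with a Wald 95% CI on the log-odds scale. Hypothetical counts.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = exposed with/without outcome; c,d = unexposed with/without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: male last authors vs. not, female last authors vs. not.
male_last, male_not = 1200, 6362
female_last, female_not = 260, 2296
or_, lo, hi = odds_ratio_ci(male_last, male_not, female_last, female_not)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```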


Subjects
Emergency Medicine/statistics & numerical data, Publications/standards, Sex Characteristics, Cross-Sectional Studies, Female, Humans, Male, Periodicals as Topic/statistics & numerical data, Publications/statistics & numerical data, Sexism/psychology, Sexism/statistics & numerical data
17.
J Clin Epidemiol ; 136: 189-202, 2021 08.
Article in English | MEDLINE | ID: mdl-34033915

ABSTRACT

OBJECTIVE: To give an overview of the available methods for investigating research misconduct in health-related research. STUDY DESIGN AND SETTING: In this scoping review, we conducted a literature search in MEDLINE, Embase, the Cochrane CENTRAL Register of Studies Online (CRSO), and the Virtual Health Library portal up to July 2020. We included papers that mentioned and/or described methods for screening or assessing research misconduct in health-related research. We categorized the identified methods into four groups according to their scope: overall concern, textual concern, image concern, and data concern. RESULTS: We included 57 papers reporting on 27 methods: two on overall concern, four on textual concern, three on image concern, and 18 on data concern. Apart from the methods for locating textual plagiarism and image manipulation, all other methods, whether theoretical or empirical, are based on examples, are not standardized, and lack formal validation. CONCLUSION: Existing methods cover a wide range of issues regarding research misconduct. Although measures to counteract textual plagiarism are well implemented, tools to investigate other forms of research misconduct are rudimentary and labour-intensive. To cope with the rising challenge of research misconduct, further development of automatic tools and routine validation of these methods are needed. TRIAL REGISTRATION NUMBER: Center for Open Science (OSF) (https://osf.io/mq89w).
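As a toy illustration of the textual-concern category, the snippet below flags overlapping passages with word 5-gram Jaccard similarity; real plagiarism-detection tools are far more sophisticated, and the n-gram size and threshold here are arbitrary.

```python
# Flag heavily overlapping text using word 5-gram Jaccard similarity.
def ngrams(text, n=5):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

submitted = "the intervention significantly reduced systolic blood pressure in the treated group"
published = "our intervention significantly reduced systolic blood pressure in the treated cohort"

overlap = jaccard(ngrams(submitted), ngrams(published))
print(f"5-gram Jaccard overlap: {overlap:.2f}")
if overlap > 0.3:                      # arbitrary screening threshold
    print("Flag for manual review")
```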


Subjects
Biomedical Research/statistics & numerical data, Biomedical Research/standards, Plagiarism, Publications/statistics & numerical data, Publications/standards, Scientific Misconduct/statistics & numerical data
19.
PLoS Biol ; 19(4): e3001162, 2021 04.
Article in English | MEDLINE | ID: mdl-33872298

ABSTRACT

Many randomized controlled trials (RCTs) are biased and difficult to reproduce due to methodological flaws and poor reporting. There is increasing attention for responsible research practices and implementation of reporting guidelines, but whether these efforts have improved the methodological quality of RCTs (e.g., lower risk of bias) is unknown. We, therefore, mapped risk-of-bias trends over time in RCT publications in relation to journal and author characteristics. Meta-information of 176,620 RCTs published between 1966 and 2018 was extracted. The risk-of-bias probability (random sequence generation, allocation concealment, blinding of patients/personnel, and blinding of outcome assessment) was assessed using a risk-of-bias machine learning tool. This tool was simultaneously validated using 63,327 human risk-of-bias assessments obtained from 17,394 RCTs evaluated in the Cochrane Database of Systematic Reviews (CDSR). Moreover, RCT registration and CONSORT Statement reporting were assessed using automated searches. Publication characteristics included the number of authors, journal impact factor (JIF), and medical discipline. The annual number of published RCTs substantially increased over 4 decades, accompanied by increases in authors (5.2 to 7.8) and institutions (2.9 to 4.8). The risk of bias remained present in most RCTs but decreased over time for allocation concealment (63% to 51%), random sequence generation (57% to 36%), and blinding of outcome assessment (58% to 52%). Trial registration (37% to 47%) and the use of the CONSORT Statement (1% to 20%) also rapidly increased. In journals with a higher impact factor (>10), the risk of bias was consistently lower with higher levels of RCT registration and the use of the CONSORT Statement. Automated risk-of-bias predictions had accuracies above 70% for allocation concealment (70.7%), random sequence generation (72.1%), and blinding of patients/personnel (79.8%), but not for blinding of outcome assessment (62.7%). In conclusion, the likelihood of bias in RCTs has generally decreased over the last decades. This optimistic trend may be driven by increased knowledge augmented by mandatory trial registration and more stringent reporting guidelines and journal requirements. Nevertheless, relatively high probabilities of bias remain, particularly in journals with lower impact factors. This emphasizes that further improvement of RCT registration, conduct, and reporting is still urgently needed.
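To convey the general idea behind an automated risk-of-bias prediction tool (not the validated machine learning tool used in this study), the sketch below trains a toy bag-of-words classifier on methods-section snippets labelled with a bias rating, assuming scikit-learn.

```python
# Toy stand-in for a risk-of-bias text classifier: TF-IDF features plus
# logistic regression over methods-section snippets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "participants were randomized using a computer generated sequence",
    "allocation was concealed with sealed opaque envelopes",
    "patients were assigned according to day of admission",
    "the attending physician chose the treatment group",
]
labels = ["low", "low", "high", "high"]            # risk-of-bias label per trial

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

new_text = ["random sequence generation by coin toss, assessors blinded"]
print(clf.predict(new_text))
```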


Subjects
Publications, Randomized Controlled Trials as Topic/methods, Randomized Controlled Trials as Topic/statistics & numerical data, Bias, Bibliometrics, Data Accuracy, Data Management/history, Data Management/methods, Data Management/standards, Data Management/trends, Databases, Bibliographic/history, Databases, Bibliographic/standards, Databases, Bibliographic/trends, History, 20th Century, History, 21st Century, Humans, Outcome Assessment, Health Care, Public Reporting of Healthcare Data, Publications/history, Publications/standards, Publications/statistics & numerical data, Publications/trends, Quality Improvement/history, Quality Improvement/trends, Randomized Controlled Trials as Topic/history, Systematic Reviews as Topic
20.
Med Sci (Paris) ; 37(4): 315-316, 2021 04.
Article in French | MEDLINE | ID: mdl-33908844